Robot Talk Episode 140 – Robot balance and agility, with Amir Patel

Robohub

Amir Patel is an Associate Professor of Robotics & AI in the Department of Computer Science at University College London (UCL). His research uses robotics methods (sensor fusion, computer vision, mechanical modelling, and optimal control) to understand and quantify animal locomotion, especially in high-speed predators such as the cheetah, and to translate these insights into bio-inspired machines. Previously, he served on the faculty of Electrical Engineering at the University of Cape Town, where he founded and directed the African Robotics Unit (ARU). Robot Talk is a weekly podcast that explores the exciting world of robotics, artificial intelligence and autonomous machines.


Toxic 'forever chemicals' linked to cancer now associated with major pregnancy complication

Daily Mail - Science & tech



A Theoretical and Empirical Taxonomy of Imbalance in Binary Classification

Essomba, Rose Yvette Bandolo, Fokoué, Ernest

arXiv.org Machine Learning

Class imbalance significantly degrades classification performance, yet its effects are rarely analyzed from a unified theoretical perspective. We propose a principled framework based on three fundamental scales: the imbalance coefficient $\eta$, the sample-to-dimension ratio $\kappa$, and the intrinsic separability $\Delta$. Starting from the Gaussian Bayes classifier, we derive closed-form Bayes errors and show how imbalance shifts the discriminant boundary, yielding a deterioration slope that predicts four regimes: Normal, Mild, Extreme, and Catastrophic. Using a balanced high-dimensional genomic dataset, we vary only $\eta$ while keeping $\kappa$ and $\Delta$ fixed. Across parametric and non-parametric models, empirical degradation closely follows theoretical predictions: minority Recall collapses once $\log(\eta)$ exceeds $\Delta\sqrt{\kappa}$, Precision increases asymmetrically, and F1-score and PR-AUC decline in line with the predicted regimes. These results show that the triplet $(\eta, \kappa, \Delta)$ provides a model-agnostic, geometrically grounded explanation of imbalance-induced deterioration.
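The boundary-shift mechanism is easy to see in one dimension. The sketch below is a simplified illustration, not the paper's full $(\eta, \kappa, \Delta)$ framework (in 1-D the dimension ratio $\kappa$ plays no role): for two unit-variance Gaussians separated by $\Delta$, the Bayes-optimal threshold moves by $\log(\eta)/\Delta$ toward the minority mean, and minority recall decays accordingly.

```python
from math import log
from statistics import NormalDist

def minority_recall(eta, delta):
    """Bayes-optimal minority recall for two unit-variance 1-D Gaussians
    whose means are `delta` apart, with majority/minority prior ratio `eta`.
    The discriminant boundary sits at delta/2 + log(eta)/delta, i.e. it
    shifts toward the minority class as imbalance grows."""
    t = delta / 2 + log(eta) / delta       # shifted discriminant boundary
    return NormalDist().cdf(delta - t)     # P(x > t | minority class)

# Recall collapses as eta grows while delta stays fixed:
for eta in (1, 3, 10, 100):
    print(eta, round(minority_recall(eta, delta=2.0), 3))
```

With $\Delta = 2$, recall falls from about 0.84 at balance to under 0.1 at a 100:1 ratio, mirroring the Normal-to-Catastrophic progression described in the abstract.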


Harmonizing Community Science Datasets to Model Highly Pathogenic Avian Influenza (HPAI) in Birds in the Subantarctic

Littauer, Richard, Bubendorfer, Kris

arXiv.org Artificial Intelligence

Community science observational datasets are useful in epidemiology and ecology for modeling species distributions, but the heterogeneous nature of the data presents significant challenges for standardization, data quality assurance and control, and workflow management. In this paper, we present a data workflow for cleaning and harmonizing multiple community science datasets, which we implement in a case study using eBird, iNaturalist, GBIF, and other datasets to model the impact of highly pathogenic avian influenza in populations of birds in the subantarctic. We predict population sizes for several species whose demographics are not known, and we present novel estimates of potential HPAI mortality rates for those species, based on an aggregated dataset of mortality rates in the subantarctic.
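The harmonization step amounts to mapping each source's record layout onto one shared schema. A minimal stdlib-only sketch of that idea follows; the field names (`sciName`, `taxon`, `scientificName`, etc.) and the single-species lookup table are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical lookup from scientific to common name; real workflows would
# use a full taxonomy backbone.
CANONICAL = {"Aptenodytes patagonicus": "King Penguin"}

def harmonize(record, source):
    """Normalize one observation record from a named source dataset
    into a shared {species, common, date} schema."""
    if source == "ebird":
        name, date = record["sciName"], record["obsDt"][:10]
    elif source == "inaturalist":
        name, date = record["taxon"], record["observed_on"]
    else:  # e.g. GBIF Darwin Core rows
        name, date = record["scientificName"], record["eventDate"][:10]
    return {"species": name, "common": CANONICAL.get(name), "date": date}

recs = [
    ({"sciName": "Aptenodytes patagonicus", "obsDt": "2023-01-04 09:12"}, "ebird"),
    ({"taxon": "Aptenodytes patagonicus", "observed_on": "2023-01-05"}, "inaturalist"),
]
rows = [harmonize(r, s) for r, s in recs]
```

Once all sources emit the same schema, downstream steps (deduplication, quality filtering, distribution modeling) can treat the merged rows uniformly.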


FRIEDA: Benchmarking Multi-Step Cartographic Reasoning in Vision-Language Models

Pyo, Jiyoon, Jiao, Yuankun, Jung, Dongwon, Li, Zekun, Jang, Leeje, Kirsanova, Sofia, Kim, Jina, Lin, Yijun, Liu, Qin, Xie, Junyi, Askari, Hadi, Xu, Nan, Chen, Muhao, Chiang, Yao-Yi

arXiv.org Artificial Intelligence

Cartographic reasoning is the skill of interpreting geographic relationships by aligning legends, map scales, compass directions, map texts, and geometries across one or more map images. Although essential as a concrete cognitive capability and for critical tasks such as disaster response and urban planning, it remains largely unevaluated. Building on progress in chart and infographic understanding, recent large vision-language model (LVLM) studies on map visual question-answering (VQA) often treat maps as a special case of charts. In contrast, map VQA demands comprehension of layered symbology (e.g., symbols, geometries, and text labels) as well as spatial relations tied to orientation and distance that often span multiple maps and are not captured by chart-style evaluations. To address this gap, we introduce FRIEDA, a benchmark for testing complex open-ended cartographic reasoning in LVLMs. FRIEDA sources real map images from documents and reports in various domains and geographical areas. Following classifications in Geographic Information System (GIS) literature, FRIEDA targets all three categories of spatial relations: topological (border, equal, intersect, within), metric (distance), and directional (orientation). All questions require multi-step inference, and many require cross-map grounding and reasoning. We evaluate eleven state-of-the-art LVLMs under two settings: (1) the direct setting, where we provide the maps relevant to the question, and (2) the contextual setting, where the model may have to identify the maps relevant to the question before reasoning. Even the strongest models, Gemini-2.5-Pro and GPT-5-Think, achieve only 38.20% and 37.20% accuracy, respectively, far below human performance of 84.87%. These results reveal a persistent gap in multi-step cartographic reasoning, positioning FRIEDA as a rigorous benchmark to drive progress on spatial intelligence in LVLMs.
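Two of the three GIS relation categories the benchmark targets, metric and directional, have simple ground-truth computations on point features. The toy sketch below (illustrative coordinates, not FRIEDA's annotation pipeline) shows how such reference answers could be derived for planar points with x pointing east and y pointing north.

```python
import math

def direction(a, b):
    """8-way compass direction from point a to point b (x east, y north)."""
    ang = math.degrees(math.atan2(b[0] - a[0], b[1] - a[1])) % 360
    names = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return names[int((ang + 22.5) // 45) % 8]

def distance(a, b):
    """Euclidean (metric) distance between two planar points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])
```

Topological relations (border, equal, intersect, within) need polygon geometry rather than points, which is where cross-map alignment of scales and legends makes the task genuinely multi-step.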


HPC-Driven Modeling with ML-Based Surrogates for Magnon-Photon Dynamics in Hybrid Quantum Systems

Song, Jialin, Tang, Yingheng, Ren, Pu, Takayoshi, Shintaro, Sawant, Saurabh, Zhu, Yujie, Hu, Jia-Mian, Nonaka, Andy, Mahoney, Michael W., Erichson, Benjamin, Yao, Zhi

arXiv.org Artificial Intelligence

Simulating hybrid magnonic quantum systems remains a challenge due to the large disparity between the timescales of the two systems. We present a massively parallel GPU-based simulation framework that enables fully coupled, large-scale modeling of on-chip magnon-photon circuits. To accelerate design workflows, we develop a physics-informed machine learning surrogate trained on the simulation data, reducing computational cost while maintaining accuracy. This combined approach reveals real-time energy exchange dynamics and reproduces key phenomena such as anti-crossing behavior and the suppression of ferromagnetic resonance under strong electromagnetic fields. By addressing the multiscale and multiphysics challenges in magnon-photon modeling, our framework enables scalable simulation and rapid prototyping of next-generation quantum and spintronic devices.

1 Introduction

Hybrid quantum systems, which combine distinct physical platforms, are a promising route toward advanced quantum technologies, as they harness strong interactions that may not be readily achievable in a single platform [1, 2]. These systems take many forms, coupling any two (or more) quantum platforms, for example, superconducting qubits [3, 4], microwave resonators [5], single spins [6], spin ensembles [4, 7-9], or mechanical resonators [10-12]. These heterogeneous systems leverage complementary advantages of each component, but their rich multi-physics interactions pose formidable modeling challenges. A prominent example is cavity magnonics, where collective spin excitations (magnons) couple with microwave photons in a resonant cavity to form hybrid magnon-polariton modes when tuned into resonance [13-15]. These states are essential for quantum operations such as mode swapping [16, 17], quantum state storage [4, 18, 19], and dynamic control of energy exchange [19, 20]. The hallmark experimental signature of strong magnon-photon coupling is a pronounced avoided crossing (mode splitting) in the frequency spectrum, in agreement with theoretical predictions [21] and observed in many 3D [13, 22] and on-chip 2D [7, 8, 23] cavity-based systems.
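The avoided crossing itself is already visible in the simplest two-mode coupled-oscillator picture: the eigenfrequencies of a 2x2 coupling matrix never meet, splitting by $2g$ at resonance. The sketch below uses that textbook model with illustrative GHz-scale values; it is not the paper's full multiphysics simulation.

```python
import math

def hybrid_freqs(w_c, w_m, g):
    """Eigenfrequencies of two linearly coupled modes (photon w_c, magnon w_m,
    coupling g), i.e. eigenvalues of [[w_c, g], [g, w_m]]."""
    mid = (w_c + w_m) / 2.0
    split = math.sqrt(((w_c - w_m) / 2.0) ** 2 + g ** 2)
    return mid - split, mid + split

# On resonance (w_c == w_m) the two branches are separated by exactly 2*g:
lo, hi = hybrid_freqs(w_c=5.0, w_m=5.0, g=0.1)
```

Sweeping the magnon frequency `w_m` through `w_c` traces out the anti-crossing: the gap between the two branches is smallest, but never zero, at resonance.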


JaxWildfire: A GPU-Accelerated Wildfire Simulator for Reinforcement Learning

Çakır, Ufuk, Darvariu, Victor-Alexandru, Lacerda, Bruno, Hawes, Nick

arXiv.org Artificial Intelligence

Artificial intelligence methods are increasingly being explored for managing wildfires and other natural hazards. In particular, reinforcement learning (RL) is a promising path towards improving outcomes in such uncertain decision-making scenarios and moving beyond reactive strategies. However, training RL agents requires many environment interactions, and the speed of existing wildfire simulators is a severely limiting factor. We introduce $\texttt{JaxWildfire}$, a simulator underpinned by a principled probabilistic fire spread model based on cellular automata. It is implemented in JAX and enables vectorized simulations using $\texttt{vmap}$, allowing high throughput of simulations on GPUs. We demonstrate that $\texttt{JaxWildfire}$ achieves 6-35x speedup over existing software and enables gradient-based optimization of simulator parameters. Furthermore, we show that $\texttt{JaxWildfire}$ can be used to train RL agents to learn wildfire suppression policies. Our work is an important step towards enabling the advancement of RL techniques for managing natural hazards.
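A probabilistic cellular-automaton spread model of the kind described above can be sketched in a few lines. This toy version uses plain stdlib Python rather than the paper's JAX/`vmap` implementation, and `p_spread` is an illustrative parameter, not a calibrated one.

```python
import random

UNBURNT, BURNING, BURNT = 0, 1, 2

def step(grid, p_spread, rng):
    """One synchronous CA update: each burning cell burns out and ignites
    each unburnt 4-neighbour independently with probability p_spread."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == BURNING:
                new[i][j] = BURNT
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < n and grid[a][b] == UNBURNT:
                        if rng.random() < p_spread:
                            new[a][b] = BURNING
    return new

rng = random.Random(0)
grid = [[UNBURNT] * 5 for _ in range(5)]
grid[2][2] = BURNING            # single ignition point
grid = step(grid, p_spread=1.0, rng=rng)
```

In a JAX version, the per-cell update would be expressed as array operations so that `vmap` can batch many independent fire scenarios on a GPU, which is what makes RL-scale environment throughput feasible.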


Hallucination reduction with CASAL: Contrastive Activation Steering For Amortized Learning

Yang, Wannan, Qiu, Xinchi, Yu, Lei, Zhang, Yuchen, Yang, Aobo, Kokhlikyan, Narine, Cancedda, Nicola, Garcia-Olano, Diego

arXiv.org Artificial Intelligence

Large Language Models (LLMs) exhibit impressive capabilities but often hallucinate, confidently providing incorrect answers instead of admitting ignorance. Prior work has shown that models encode linear representations of their own knowledge and that activation steering can reduce hallucinations. These approaches, however, require real-time monitoring and intervention during inference. We introduce Contrastive Activation Steering for Amortized Learning (CASAL), an efficient algorithm that connects interpretability with amortized optimization. CASAL directly bakes the benefits of activation steering into the model's weights. Once trained, LLMs answer questions they know while abstaining from answering those they do not. CASAL's light-weight design requires training only a submodule of a single transformer layer and yet reduces hallucination by 30%-40% across multiple short-form QA benchmarks. CASAL is 30x more compute-efficient and 20x more data-efficient than strong LoRA-based baselines such as SFT and DPO, boosting its practical applicability in data-scarce domains. Importantly, CASAL also generalizes effectively to out-of-distribution (OOD) domains. We showcase CASAL's flexibility in mitigating hallucinations in both text-only and vision-language models. To our knowledge, CASAL is the first steering-based training method that has been shown to be effective for both dense and Mixture-of-Experts (MoE) models. CASAL represents a promising step forward in applying interpretability-inspired methods for practical deployment in production systems.
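The contrastive steering idea CASAL builds on is simple to state: take the difference of mean hidden activations on "known" versus "unknown" prompts, and add that direction (scaled) to a layer's output. The sketch below shows this inference-time variant on toy 4-d activations; it is not the paper's method, which amortizes the steering effect into trained weights rather than intervening at inference.

```python
def mean(vs):
    """Componentwise mean of a list of equal-length vectors."""
    return [sum(x) / len(vs) for x in zip(*vs)]

# Hypothetical hidden states collected at one layer (illustrative numbers).
known = [[1.0, 0.2, 0.0, 0.5], [0.8, 0.0, 0.1, 0.7]]
unknown = [[0.1, 0.9, 0.0, 0.2], [0.3, 1.1, 0.2, 0.0]]

# Contrastive steering direction: mean(known) - mean(unknown).
steer = [a - b for a, b in zip(mean(known), mean(unknown))]

def apply_steering(hidden, alpha=1.0):
    """Add the scaled steering vector to one hidden state
    (the runtime-intervention baseline CASAL amortizes away)."""
    return [h + alpha * s for h, s in zip(hidden, steer)]
```

Amortizing this into the weights of a single-layer submodule, as the abstract describes, removes the need for the monitoring-and-intervention loop at inference time.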